57 research outputs found

    A demonstration of 'broken' visual space

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
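    The logic of this test can be sketched in code: if perceived distances arose from any single coherent 3-D space, pairwise ‘farther than’ judgements would have to be transitive, so a cycle among them rules out any one-to-one mapping to real depths. The sketch below is our own illustration of that argument, not the authors’ analysis code.

    ```python
    # Hypothetical illustration: represent each judgement 'a seen as farther
    # than b' as a directed edge a -> b. A consistent depth ordering exists
    # if and only if this graph is acyclic.

    def has_consistent_ordering(judgements):
        """judgements: iterable of (a, b), meaning 'a judged farther than b'.
        Returns True iff a single depth ordering can explain them all."""
        graph = {}
        for a, b in judgements:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set())
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {node: WHITE for node in graph}

        def dfs(node):
            colour[node] = GREY
            for nxt in graph[node]:
                if colour[nxt] == GREY:            # back edge: cycle found
                    return False
                if colour[nxt] == WHITE and not dfs(nxt):
                    return False
            colour[node] = BLACK
            return True

        return all(dfs(n) for n in graph if colour[n] == WHITE)

    # The pattern reported in the abstract: A > B > D yet also A < C < D.
    inconsistent = [("A", "B"), ("B", "D"), ("C", "A"), ("D", "C")]
    print(has_consistent_ordering(inconsistent))   # False: no coherent space
    consistent = [("A", "B"), ("B", "C"), ("A", "C")]
    print(has_consistent_ordering(consistent))     # True
    ```
    
    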

    Effects of exposure to facial expression variation in face learning and recognition.

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task in which the number of facial expressions of each face presented during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions showed no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

    Insect Brains Use Image Interpolation Mechanisms to Recognise Rotated Objects

    Recognising complex three-dimensional objects presents significant challenges to visual systems when these objects are rotated in depth. The image processing requirements for reliable individual recognition under these circumstances are computationally intensive since local features and their spatial relationships may significantly change as an object is rotated in the horizontal plane. Visual experience is known to be important in primate brains learning to recognise rotated objects, but it is currently unknown how animals with comparatively simple brains deal with the problem of reliably recognising objects seen from different viewpoints. We show that honeybees, despite their miniature brains, initially demonstrate a low tolerance for novel views of complex shapes (e.g. human faces), but can learn to recognise novel views of stimuli by interpolating between, or ‘averaging’, views they have experienced. The finding that visual experience is also important for bees has important implications for understanding how three-dimensional, biologically relevant objects like flowers are recognised in complex environments, and for how machine vision might be taught to solve related visual problems.
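    The view-interpolation idea can be illustrated with a toy sketch (our own simplification, not the bees’ visual code or the authors’ model): a ‘view’ is reduced to a 2-D feature vector that rotates with viewing angle, and a novel in-between view is matched better by the average of two trained views than by either trained view alone.

    ```python
    import math

    # Toy model: the feature vector of a view rotates with viewing angle.
    def view(theta_deg):
        t = math.radians(theta_deg)
        return (math.cos(t), math.sin(t))

    def dist(u, v):
        return math.dist(u, v)

    trained = [view(0.0), view(60.0)]                        # experienced views
    interpolated = tuple((a + b) / 2 for a, b in zip(*trained))  # 'averaged' view
    novel = view(30.0)                                       # unseen viewpoint

    d_interp = dist(novel, interpolated)
    d_single = min(dist(novel, t) for t in trained)
    print(d_interp < d_single)   # True: the averaged template generalises better
    ```

    The effect here is purely geometric: the mean of the two trained feature vectors points at the intermediate angle, so matching against it tolerates the rotation that defeats either single-view template.
    
    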

    Learning to Use Illumination Gradients as an Unambiguous Cue to Three Dimensional Shape

    The luminance and colour gradients across an image are the result of complex interactions between object shape, material and illumination. Using such variations to infer object shape or surface colour is therefore a difficult problem for the visual system. We know that changes to the shape of an object can affect its perceived colour, and that shading gradients confer a sense of shape. Here we investigate whether the visual system is able to effectively utilise these gradients as a cue to shape perception, even when additional cues are not available. We tested shape perception of a folded card object that contained illumination gradients in the form of shading and more subtle effects such as inter-reflections. Our results suggest that observers are able to use the gradients to make consistent shape judgements. In order to do this, observers must be given the opportunity to learn suitable assumptions about the lighting and scene. Using a variety of different training conditions, we demonstrate that learning can occur quickly and requires only coarse information. We also establish that learning does not deliver a trivial mapping between gradient and shape; rather, learning leads to the acquisition of assumptions about lighting and scene parameters that subsequently allow for gradients to be used as a shape cue. The perceived shape is shown to be consistent for convex and concave versions of the object that exhibit very different shading, and also similar to that delivered by outline, a largely unrelated cue to shape. Overall, our results indicate that, although gradients are less reliable than some other cues, the relationship between gradients and shape can be quickly assessed and the gradients therefore used effectively as a visual shape cue.
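    Why lighting assumptions must be learned before gradients can serve as a shape cue can be shown with a minimal sketch (our own illustration with assumed geometry, not the paper’s stimulus code): under Lambertian shading, a convex fold lit from one side produces the same luminance pair as a concave fold lit from the other side, so the gradient alone does not fix the shape.

    ```python
    import math

    # Minimal sketch: a folded card as two planar facets, each shaded by the
    # Lambertian rule I = max(0, cos(angle between normal and light)).
    def facet_luminance(normal_deg, light_deg):
        return max(0.0, math.cos(math.radians(normal_deg - light_deg)))

    def card_image(fold_deg, light_deg):
        # Facet normals tilt +/- fold_deg away from straight up (90 degrees);
        # a negative fold angle gives the concave version of the card.
        left, right = 90 + fold_deg, 90 - fold_deg
        return (facet_luminance(left, light_deg),
                facet_luminance(right, light_deg))

    convex  = card_image(+30, light_deg=60)    # convex card, light from one side
    concave = card_image(-30, light_deg=120)   # concave card, mirrored lighting
    print(all(math.isclose(a, b) for a, b in zip(convex, concave)))  # True
    ```

    Identical shading from two different shapes is exactly the ambiguity the observers in the study had to resolve by acquiring assumptions about the lighting.
    
    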

    The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute a signature for recognition that is invariant to transformations yet discriminative. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects, and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions, in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions.
    Funding: National Science Foundation (U.S.) Science and Technology Center (Award CCF-1231216); National Science Foundation (U.S.) (Grant NSF-0640097); National Science Foundation (U.S.) (Grant NSF-0827427); United States Air Force Office of Scientific Research (Grant FA8650-05-C-7262); Eugene McDermott Foundation.
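    The core signature idea can be illustrated with a toy sketch (a drastic simplification of the paper’s framework, with made-up data): pooling an image’s dot products over the orbit of a stored template under a transformation group yields a value that is invariant to that transformation of the input, even for objects never used as templates.

    ```python
    # Toy sketch: the transformation group is cyclic shift of a 1-D 'image';
    # the signature max-pools dot products over all shifted template copies.

    def shift(v, k):
        return v[k:] + v[:k]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def signature(image, template):
        # Max-pool over the template's orbit under cyclic translation.
        return max(dot(image, shift(template, k)) for k in range(len(template)))

    template = [1, 2, 0, -1, 3, 0]           # stored during 'training'
    novel    = [0, 1, 3, 0, -2, 1]           # an object never seen before
    print(signature(novel, template) ==
          signature(shift(novel, 2), template))   # True: shift-invariant
    ```

    Invariance transfers to the novel object here because it transforms the same way the template does (by shift); under a different transformation the pooled values would no longer match, which is the compatibility condition the abstract describes.
    
    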

    The effect of landmark and body-based sensory information on route knowledge

    No full text
    Two experiments investigated the effects of landmarks and body-based information on route knowledge. Participants made four out-and-back journeys along a route, guided only on the first outward trip and with feedback every time an error was made. Experiment 1 used 3-D virtual environments (VEs) with a desktop monitor display, and participants were provided with no supplementary landmarks, only global landmarks, only local landmarks, or both global and local landmarks. Local landmarks significantly reduced the number of errors that participants made, but global landmarks did not. Experiment 2 used a head-mounted display; here, participants who physically walked through the VE (translational and rotational body-based information) made 36% fewer errors than did participants who traveled by physically turning but changing position using a joystick. Overall, the experiments showed that participants were less sure of where to turn than of which way to turn, and journey direction interacted with sensory information to affect the number and types of errors participants made.

    Parametric animacy percept evoked by a single moving dot mimicking natural stimuli

    Identifying moving things in the environment is a priority for animals as these could be prey, predators, or mates. When the shape of a moving object is hard to see, motion becomes an important cue to distinguish animate from inanimate things. We report a new stimulus in which a single moving dot evokes a reasonably strong percept of animacy by mimicking the motion of naturally occurring stimuli, with minimal context information. Stimulus movements are controlled by an equation such that changes in a single movement parameter lead to gradual changes in animacy judgments with minimal changes in low-level stimulus properties. An infinite number of stimuli can be created between the animate and inanimate extremes. A series of experiments confirms the strength of the percept and shows that observers tend to follow the stimulus with their eye gaze. However, eye movements are not necessary for perceptual judgments, as forced fixation on the display center only slightly reduces the amplitude of percept changes. Withdrawing attentional resources from the animacy judgment using a simultaneous secondary task further reduces percept amplitudes without abolishing them. This stimulus could open new avenues for the principled study of animacy judgments based on object motion only.
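    A single-parameter motion continuum of this kind can be sketched generically (this is our own illustrative parameterisation, not the authors’ published equation): one parameter morphs the dot’s trajectory from inertial gliding, typically seen as inanimate, to self-propelled motion with spontaneous heading changes, typically seen as animate.

    ```python
    import math
    import random

    # Generic sketch: alpha in [0, 1] scales spontaneous reorientation of the
    # dot's heading; alpha = 0 gives a straight glide, alpha = 1 erratic,
    # self-propelled motion. Intermediate alphas form a continuum of stimuli.
    def trajectory(alpha, n_steps=200, speed=1.0, seed=0):
        rng = random.Random(seed)
        x = y = heading = 0.0
        points = [(x, y)]
        for _ in range(n_steps):
            heading += alpha * rng.gauss(0.0, 0.4)   # animacy parameter at work
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            points.append((x, y))
        return points

    straight = trajectory(alpha=0.0)   # inanimate extreme: constant heading
    wiggly   = trajectory(alpha=1.0)   # animate extreme: frequent course changes
    ```

    Because only one scalar changes between stimuli, low-level properties such as speed stay fixed while the quality of the motion varies, mirroring the design principle described in the abstract.
    
    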

    A chimeric point-light walker

    No full text

    Admittance-based bilateral teleoperation with time delay for an Unmanned Aerial Vehicle involved in an obstacle avoidance task

    No full text
    The paper focuses on the implementation of an admittance-based control scheme in a bilateral teleoperation set-up for an Unmanned Aerial Vehicle (UAV) under time delay. The goal of this study is to assess and improve the stability characteristics of the bilateral teleoperator. Computer simulations were conducted to evaluate the effectiveness of the admittance-based control scheme. A commercial impedance-type haptic device served as the control stick: the master device. The slave system consists of the dynamics of the aircraft under control; to keep the pilot's attention focused on the task, only the lateral aircraft dynamics was considered. A virtual environment was displayed during the experiments to provide visual cues. To evaluate the system, we prepared a control task in which the aircraft had to be flown through a virtual urban canyon, avoiding collisions with buildings placed irregularly (non-Manhattan-like) along the desired path. A repulsive force field was associated with the obstacles, and a force was sent back to the operator through the communication link. A compensator capable of autonomously flying the aircraft through the buildings with satisfactory performance was first designed using linear techniques; the haptic augmentation system was then derived from the compensator by splitting it into two parts: the actual haptic cueing for the pilot and the simulated pilot effort. The latter component was used only for the preliminary assessment of the system and was removed in simulations where a real pilot operated the stick (the master device). Experimental results, as well as analytical arguments, have shown that a haptic force simply proportional to the distance from the obstacles cannot stabilize the system: a substantial anticipatory effect or phase lead (like the derivative action of standard industrial controllers) is needed.
To manage the degradation of performance and overall stability when a delay is present in the communication paths, an admittance-based controller was designed together with an observer for the force exerted by the human operator on the stick. Simulations and tests with real pilots showed that the admittance-based force-position teleoperation scheme improves the performance of the system under consideration.
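    The two ingredients described here can be sketched minimally (illustrative structure and gains of our own choosing, not the paper’s tuned controller): an admittance law that turns measured stick force into a commanded lateral position, and a haptic obstacle force that adds the anticipatory (rate-dependent) term the authors found necessary.

    ```python
    # Admittance control: the controller renders a virtual mechanical system
    # M*a + B*v + K*x = F and commands the resulting position x to the slave.
    class Admittance:
        def __init__(self, mass=1.0, damping=4.0, stiffness=2.0, dt=0.01):
            self.m, self.b, self.k, self.dt = mass, damping, stiffness, dt
            self.x = self.v = 0.0

        def step(self, force):
            a = (force - self.b * self.v - self.k * self.x) / self.m
            self.v += a * self.dt
            self.x += self.v * self.dt      # commanded lateral position
            return self.x

    def haptic_force(dist, dist_rate, gain=1.0, lead_gain=2.0):
        # Illustrative repulsion (here inverse-distance): the dist_rate term
        # supplies the phase lead a purely distance-based force lacks.
        return -(gain / max(dist, 1e-3) + lead_gain * max(-dist_rate, 0.0))

    adm = Admittance()
    for _ in range(2000):
        x = adm.step(force=1.0)             # constant 1 N push on the stick
    print(round(x, 2))                      # settles toward F/K = 0.5
    ```

    The virtual stiffness K gives the admittance loop a well-defined rest position under a steady push, while the rate term in the haptic force grows only when the aircraft is closing on an obstacle, which is one simple way to realise the derivative-like anticipation the abstract calls for.
    
    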
